Supervised Spectral Latent Variable Models
Authors
Abstract
We present a probabilistic structured prediction method for learning input-output dependencies in which correlations between outputs are modeled as low-dimensional manifolds constrained both by geometric, distance-preserving output relations and by the predictive power of the inputs. Technically, this reduces to learning a probabilistic, input-conditional model over latent (manifold) and output variables using an alternation scheme. In one round, we optimize the parameters of an input-driven manifold predictor using latent targets given by preimages (conditional expectations) of the current manifold-to-output model. In the next round, we use the distribution given by the manifold predictor to maximize the probability of the outputs under an additional, implicit geometry-preserving constraint on the manifold. The resulting Supervised Spectral Latent Variable Model (SSLVM) combines the properties of probabilistic geometric manifold learning (it accommodates geometric constraints corresponding to any spectral embedding method, including PCA, ISOMAP, or Laplacian Eigenmaps) with additional supervisory information that further constrains the model for predictive tasks. We demonstrate the superiority of the method over baseline PPCA + regression frameworks and show its potential on difficult real-world computer vision benchmarks designed for the reconstruction of three-dimensional human poses from monocular image sequences.
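The abstract describes the alternation only at a high level. The sketch below illustrates the two-round structure under strong simplifying assumptions that are not taken from the paper: both the input-to-latent predictor and the latent-to-output map are linear ridge regressions, the spectral embedding is plain PCA, and the implicit geometry-preserving constraint is replaced by a quadratic pull of the latent coordinates toward their initial PCA embedding. The function and parameter names (`sslvm_sketch`, `fit_ridge`, `geom_weight`, `n_iters`) are illustrative, not the authors'.

```python
# Minimal sketch of the two-round alternation, NOT the paper's algorithm:
# linear-Gaussian maps, PCA embedding, and a quadratic penalty standing in
# for the implicit geometry-preserving constraint.
import numpy as np


def fit_ridge(A, B, lam=1e-3):
    """Least-squares map W such that A @ W ~= B, with ridge penalty lam."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ B)


def sslvm_sketch(X, Y, latent_dim=2, n_iters=20, geom_weight=0.5, lam=1e-3):
    # Step 0: a spectral (here: PCA) embedding of the outputs gives the
    # initial manifold coordinates Z0 and the geometric reference.
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    Z0 = Yc @ Vt[:latent_dim].T
    Z = Z0.copy()

    for _ in range(n_iters):
        # Round 1: fit the input-driven manifold predictor x -> z using the
        # current latent targets (stand-ins for the conditional expectations
        # under the latent-to-output model).
        W_xz = fit_ridge(X, Z, lam)
        Z_pred = X @ W_xz

        # Round 2: fit the latent-to-output map z -> y and update the latent
        # coordinates to explain the outputs, softly tied to the input-driven
        # prediction and to the initial spectral embedding.
        W_zy = fit_ridge(Z, Yc, lam)
        # Closed-form minimiser over Z of
        #   ||Yc - Z W_zy||^2 + ||Z - Z_pred||^2 + geom_weight * ||Z - Z0||^2.
        k = latent_dim
        lhs = W_zy @ W_zy.T + (1.0 + geom_weight) * np.eye(k)
        rhs = Yc @ W_zy.T + Z_pred + geom_weight * Z0
        Z = rhs @ np.linalg.inv(lhs)

    return W_xz, W_zy, Z


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                      # inputs (e.g. image features)
    Y = np.tanh(X[:, :3]) @ rng.normal(size=(3, 30))    # correlated outputs
    W_xz, W_zy, Z = sslvm_sketch(X, Y, latent_dim=3)
    Y_hat = (X @ W_xz) @ W_zy + Y.mean(axis=0)
    print("output RMSE:", np.sqrt(np.mean((Y - Y_hat) ** 2)))
```

With linear maps every sub-step has a closed form; the paper's probabilistic, possibly nonlinear setting would replace these ridge solves with conditional expectations under the corresponding latent-variable models and with the chosen spectral embedding's constraints.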
Similar resources
Spectral Learning of General Latent-Variable Probabilistic Graphical Models: A Supervised Learning Approach
In this CS 229 project, I designed, proved, and tested a new spectral learning algorithm for learning probabilistic graphical models with latent variables by reducing the hard learning problem to a pipeline of supervised learning tasks. This new algorithmic framework can provide us with more learning power by giving us the freedom to plug in all different kinds of supervised learning algorithm...
Full text
Spectral Methods for Supervised Topic Models
Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on either variational approximation or Monte Carlo sampling. This paper presents a novel spectral decomposition algorithm to recover the parameters of supervised latent Dirichlet allocation (sLDA) mo...
Full text
Extending Spectral Methods to New Latent Variable Models
Latent variable models are widely used in industry and research, though the problem of estimating their parameters has remained challenging; standard techniques (e.g., Expectation-Maximization) offer weak guarantees of optimality. There is a growing body of work reducing latent variable estimation problems to certain (orthogonal) spectral decompositions of symmetric tensors derived from the mo...
Full text
Anchored Discrete Factor Analysis
We present a semi-supervised learning algorithm for learning discrete factor analysis models with arbitrary structure on the latent variables. Our algorithm assumes that every latent variable has an “anchor”, an observed variable with only that latent variable as its parent. Given such anchors, we show that it is possible to consistently recover moments of the latent variables and use these mom...
Full text
Accuracy of latent-variable estimation in Bayesian semi-supervised learning
Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable on...
Full text